Gloucester, Barton cemetery - radiocarbon dating and chronological modelling
Author: Peter Marshall
Affiliation: Chronologies
Published: July 16, 2025
Abstract
Chronological modelling of the available radiocarbon and coin dates from the Gloucester, Barton cemetery was undertaken after preliminary analysis of the dated inhumations' diets indicated that they all had a terrestrial diet. A series of simulation models was then run to explore the effect of radiocarbon dating individuals from Block M that had been selected for isotopic and aDNA analysis. The results of the simulations indicate that the submission of further samples for radiocarbon dating will not improve the precision of estimates for the chronology of the Block M part of the cemetery.
Between June 2013 and December 2018 Cotswold Archaeology (CA) carried out an archaeological investigation of the former Gloscat Media Studies site, Brunswick Road, Gloucester. The fieldwork on the Media Studies site occurred in two phases. The first phase of excavation covered 0.18ha and took place between June 2013 and November 2014. An additional 0.6ha in the northern corner of the development site (known as Block M) was investigated between January 2017 and December 2018 (the two excavation areas were contiguous).
1.1 Radiocarbon dating
Fourteen radiocarbon measurements are available from the Gloscat Media Studies site (n=10) and Block M (n=4). The Gloscat Media Studies samples were analysed during March and May 2015 at Scottish Universities Environmental Research Centre (SUERC; Table 1). The Block M samples were analysed between November 2020 and February 2021 at the Bristol Radiocarbon Accelerator Mass Spectrometry (BRAMS) Facility.
Details of the dated samples, radiocarbon ages, and associated stable isotopic measurements are provided in Table 1. The radiocarbon results are conventional radiocarbon ages (Stuiver and Polach 1977). The SUERC results were corrected for fractionation using δ13C values measured during the dating process by Accelerator Mass Spectrometry (AMS), and the BRAMS results using the δ13C value measured by Isotope Ratio Mass Spectrometry (IRMS). In addition, δ13C and δ15N values have been obtained on sub-samples of the dated material by Isotope Ratio Mass Spectrometry, as these results more accurately reflect the natural isotopic composition of the sampled material.
The pretreatment, combustion, graphitisation, and measurement by AMS of the samples dated at the Scottish Universities Environmental Research Centre (SUERC-) followed the methods outlined in Dunbar et al. (2016), while those processed and dated by AMS at BRAMS followed the methods described in Knowles, Monaghan, and Evershed (2019) and Synal, Stocker, and Suter (2007).
Diet-induced offsets may arise when an animal consumes carbon from a reservoir that is not in equilibrium with the terrestrial biosphere (Lanting and van der Plicht 1998). Should one of the dietary reservoir sources possess an intrinsic radiocarbon offset, the animal will incorporate a proportion of radiocarbon that deviates from atmospheric equilibrium. Radiocarbon ages derived from such sources, if incorrectly calibrated using a solely terrestrial calibration curve, will yield anomalously old age estimates.
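As a rough illustration of the scale of such offsets (our own back-of-the-envelope sketch; the dietary proportion and reservoir age below are assumed values, not measurements from this site), a simple linear mixing assumption gives:

```r
# Illustrative linear mixing of dietary carbon (assumed values, not site data):
# the fraction of dietary carbon drawn from an offset reservoir, multiplied by
# that reservoir's intrinsic 14C offset, approximates the apparent ageing.
p_reservoir   <- 0.20  # assumed fraction of dietary carbon from a marine source
reservoir_age <- 400   # assumed marine reservoir offset, in 14C years
apparent_offset <- p_reservoir * reservoir_age
print(apparent_offset)  # c. 80 14C years too old if calibrated terrestrially
```

A fully terrestrial consumer has a reservoir fraction near zero, which is why the isotopic evidence discussed below matters: it justifies neglecting this term.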
Figure 1 and Figure 2 summarise the δ13C and δ15N values from the human bone radiocarbon samples. These indicate that the dated individuals consumed a terrestrially based diet and that the radiocarbon results are unlikely to be affected by any significant reservoir effects, so a fully terrestrial calibration curve can be employed.
Figure 1: Bivariate plot of δ13C and δ15N stable isotopic values of bone collagen
Code
# Load required libraries
library(readxl)
library(dplyr)
library(ggplot2)

# Read the Excel file
# Change "Gloscat.xlsx" to the name of your Excel file and sheet = to the
# name of the sheet holding the data you want to plot
gloscat <- read_excel("Gloscat.xlsx", sheet = "Data")

# Filter for human bone samples with complete isotope data
# (use the spreadsheet template, otherwise change the column headings)
human_bone_complete <- gloscat %>%
  filter(Material == "Human bone",
         !is.na(d13C), !is.na(d13Cerror),
         !is.na(d15N), !is.na(d15Nerror)) %>%
  select(LabCode, d13C, d13Cerror, d15N, d15Nerror)

# View the filtered data
print(human_bone_complete)
print(paste("Number of complete samples:", nrow(human_bone_complete)))

# Create δ13C vs δ15N bivariate plot with error bars
isotope_plot <- ggplot(human_bone_complete, aes(x = d13C, y = d15N)) +
  # Add error bars
  geom_errorbar(aes(ymin = d15N - d15Nerror, ymax = d15N + d15Nerror),
                width = 0.1, color = "gray50", alpha = 0.7) +
  geom_errorbarh(aes(xmin = d13C - d13Cerror, xmax = d13C + d13Cerror),
                 height = 0.1, color = "gray50", alpha = 0.7) +
  # Add data points
  geom_point(size = 3, color = "darkblue", alpha = 0.8) +
  # Add labels for each point
  geom_text(aes(label = LabCode), vjust = -0.5, hjust = 0.5,
            size = 3, color = "black") +
  # Customise the plot
  labs(title = "Gloucester: Barton cemetery",
       subtitle = expression(paste("Human bone collagen (radiocarbon)")),
       x = expression(paste(delta^13, "C (per mille)")),
       y = expression(paste(delta^15, "N (per mille)"))) +
  theme_classic() +
  theme(plot.title = element_text(size = 14, hjust = 0.5),
        plot.subtitle = element_text(size = 12, hjust = 0.5),
        axis.title = element_text(size = 12),
        axis.text = element_text(size = 10),
        panel.grid.minor = element_blank())

# Display the plot
print(isotope_plot)

# Save the plot
ggsave("isotope_bivariate_plot.png", isotope_plot,
       width = 8, height = 6, dpi = 300)
Figure 2: Bivariate plot of δ13C and δ15N stable isotopic values of bone collagen, together with potential sources
Code
# Add food source ellipses & remove individual data labels

# Read the Excel file for food source data
gloscat_sources <- read_excel("Gloscat.xlsx", sheet = "Baseline")

# Create plot without individual data labels
isotope_plot2 <- ggplot(human_bone_complete, aes(x = d13C, y = d15N)) +
  # Add error bars
  geom_errorbar(aes(ymin = d15N - d15Nerror, ymax = d15N + d15Nerror),
                width = 0.1, color = "gray50", alpha = 0.7) +
  geom_errorbarh(aes(xmin = d13C - d13Cerror, xmax = d13C + d13Cerror),
                 height = 0.1, color = "gray50", alpha = 0.7) +
  # Add data points
  geom_point(size = 3, color = "darkblue", alpha = 0.8) +
  # Customise the plot (no individual point labels this time)
  labs(title = "Gloucester: Barton cemetery",
       subtitle = expression(paste("Human bone collagen (radiocarbon)")),
       x = expression(paste(delta^13, "C (per mille)")),
       y = expression(paste(delta^15, "N (per mille)"))) +
  theme_classic() +
  theme(plot.title = element_text(size = 14, hjust = 0.5),
        plot.subtitle = element_text(size = 12, hjust = 0.5),
        axis.title = element_text(size = 12),
        axis.text = element_text(size = 10),
        panel.grid.minor = element_blank())

# Add food source ellipses
isotope_plot2 <- isotope_plot2 +
  stat_ellipse(data = gloscat_sources,
               aes(x = d13C, y = d15N, color = Source, fill = Source),
               type = "norm", level = 0.68,  # standard ellipse ≈ 1 SD
               geom = "polygon", alpha = 0.2, show.legend = TRUE)

# Display the plot
print(isotope_plot2)

# Save the plot
ggsave("isotopeplot2.png", isotope_plot2, width = 8, height = 6, dpi = 300)

# If you want to see all the food source data, use this
isotope_plot3 <- isotope_plot2 +
  geom_point(data = gloscat_sources,
             aes(x = d13C, y = d15N, color = Source),
             shape = 21, size = 3, fill = "white")

# Display the plot
print(isotope_plot3)

# Save the plot
ggsave("isotopeplot3.png", isotope_plot3, width = 8, height = 6, dpi = 300)
1.3 Chronological modelling
The chronological modelling presented here has been undertaken using OxCal 4.4 (Bronk Ramsey 2009) and the internationally agreed calibration curve for the northern hemisphere (IntCal20; Reimer et al. 2020). The models are defined by the OxCal keywords and brackets on the left-hand side of Figure 3 and Figure 4 and by the CQL2 code provided below. In the figures, calibrated radiocarbon dates are shown in outline, and the posterior density estimates produced by the chronological modelling are shown in solid black. The other distributions correspond to aspects of the model. For example, the distribution FirstBurialBlockM (Figure 4) is the posterior density estimate for when the first dated individual in Block M was interred. In the text and tables, highest posterior density intervals, which describe the posterior distributions, are given in italics.
Figure 3: Probability distributions of dates from Gloucester: Barton cemetery. Each distribution represents the relative probability that an event occurred at a particular time. For each of the dates two distributions have been plotted: one in outline, which is the result of simple radiocarbon calibration, and a solid one, based on the chronological model used. The other components of this model are shown in detail in Figure 4. The large square brackets down the left-hand side, along with the OxCal keywords, define the overall model exactly.
Figure 4: Probability distributions of dates from Gloucester: Barton cemetery. The format is identical to that of Figure 3, apart from the result followed by ‘?’, which has been excluded from the model. The large square brackets down the left-hand side, along with the OxCal keywords, define the overall model.
Expand to get the OxCal code for the model shown in Figures 3 and 4
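The full CQL2 code is not reproduced in this extract, so the following is only a schematic sketch of the kind of uniform-phase model that the keywords in Figure 3 and Figure 4 imply. The laboratory codes and radiocarbon ages are placeholders, not the actual Barton determinations, and parameter names other than FirstBurialBlockM and LastBurialBlockM are illustrative:

```oxcal
Plot()
{
 Sequence("Block M")
 {
  Boundary("StartBlockM");          // start of burial activity
  Phase("Block M burials")
  {
   R_Date("Lab-1", 1750, 25);       // placeholder determination
   R_Date("Lab-2", 1720, 27);       // placeholder determination
   First("FirstBurialBlockM");      // first dated interment
   Last("LastBurialBlockM");        // last dated interment
  };
  Boundary("EndBlockM");            // end of burial activity
 };
};
```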
We have constructed simulation models to determine the potential of further radiocarbon determinations from Gloucester, Barton cemetery to improve the precision of estimates for the chronology of the cemetery.
The components of a simulation model are those of any model. First, the available informative prior beliefs are established, from the model of existing dates (see Figure 3 and Figure 4) and from the stratigraphic matrix of suitable samples. After this, radiocarbon dates can be simulated from the pool of suitable datable material and the appropriate prior information incorporated into the simulation model. Errors on the measurements are estimated from those recently obtained by the selected laboratory on similar material of similar age. In this process the actual date of the site has to be fed into the model, which is done on the basis of our existing understanding of the site chronology. Multiple models can be run for different actual ages and for different sampling strategies to see which approach might be most effective. Some examples of the simulations created during the process of identifying the number of samples required for this project are shown in Figure 5.
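In essence, simulating a radiocarbon date means taking a 'true' calendar date, reading the corresponding radiocarbon age off the calibration curve, and adding laboratory measurement scatter. A toy base-R sketch of that idea follows; it uses a made-up linear stand-in for the calibration curve rather than IntCal20, so the numbers are illustrative only (real simulations use OxCal's R_Simulate):

```r
set.seed(1)

# Toy stand-in for a calibration curve, mapping a calendar year AD to a 14C
# age (BP). Purely illustrative -- real work uses IntCal20 via R_Simulate.
toy_curve <- function(year_ad) 1950 - year_ad + 55  # hypothetical constant offset

# Simulate one radiocarbon determination: the curve value plus measurement
# error, with the error magnitude taken from recent laboratory performance
simulate_14c <- function(year_ad, error = 30) {
  round(rnorm(1, mean = toy_curve(year_ad), sd = error))
}

simulate_14c(300)  # one simulated determination for a burial in AD 300
```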
2.2 Methodology
Calendar dates for each model were obtained using random number generation, a process of creating a sequence of numbers that does not follow any predictable pattern; such numbers are widely used in simulations, cryptography, and statistical modelling. We used the sample() function (R Core Team 2024) to generate random integers (i.e. the dates) by sampling from a specified range. Below is the syntax:
sample(x, size, replace)

Where:
• x: vector of numbers to sample from (e.g. AD 250–350);
• size: number of values to return (15: the number of inhumations being sampled for isotope and aDNA analysis);
• replace: whether sampling should be with replacement (FALSE).
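Putting those arguments together, the call used to draw the simulated calendar dates can be sketched as follows (the set.seed() call is our addition, included only to make the random draw reproducible; it is not part of the original workflow):

```r
# Draw 15 unique calendar dates (years AD) at random from AD 250-350
set.seed(42)  # our addition: makes the random draw reproducible
dates <- sample(250:350, size = 15, replace = FALSE)
print(sort(dates))
```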
We then used this code as part of a larger piece of code (see below) to generate a series of calendar dates for the R_Simulate function in OxCal (Bronk Ramsey 2009), for simulations from 15 dates down to 2 dates, and incorporated them into a series of models. The models are based on the assumption that the Block M part of the cemetery was in continuous use for a period of 100 years, between AD 250 and AD 350 (Buck, Litton, and Smith 1992). Given that this exercise required the running of 140 OxCal models, time and resources were limited, and we were therefore unable to evaluate what difference the duration of burial activity, and when it began and finished, made to our date estimates.
Code
library(openxlsx)
library(readxl)
library(writexl)
library(dplyr)

# Create a new workbook
wb <- createWorkbook()

# Loop from 15 dates down to 2 dates
for (n in 15:2) {
  # Simulate n dates between AD 250 and AD 350
  dates <- sample(250:350, n, replace = FALSE)

  # Debug: check what dates contains
  print(paste("n =", n, "dates =", paste(dates, collapse = ", ")))

  # Create a data frame
  df <- data.frame(Simulated_Dates = dates)

  # Create sheet name
  sheet_name <- paste0(n, "_simulated_dates")

  # Add worksheet and write data
  addWorksheet(wb, sheet_name)
  writeData(wb, sheet_name, df)
}

# Save the workbook
saveWorkbook(wb, "GlosBartonCemetery_simulated_dates.xlsx", overwrite = TRUE)
Figure 5 shows an extract of the full model, Block M, with the 15 simulated dates for burials shown (R_Simulate….). Each model was run 10 times.
Figure 5: Part of OxCal simulation: Block M with 15 simulated burial dates (R_Simulate, 15 simulated dates AD 250–350)
3 Analysis of simulation outputs
3.1 Extracting output from OxCal models
We extracted output from the OxCal models using the R code below. Running each model ten times as part of one OxCal analysis results in a considerable amount of data; the code makes extracting it less time-consuming.
Code
library(openxlsx)
library(readxl)
library(writexl)
library(dplyr)

# R code to extract the @FirstBurialBlockM & @LastBurialBlockM estimates from a
# Barton cemetery simulation file. This is an example for extracting the two
# parameters from a single OxCal file (10 models) with n simulated dates; you
# need to repeat it for each OxCal model.

# Read the file (suppress warning about incomplete final line)
file_path <- "GlosBartonCemeterySimulation06.txt"  # you will need to change the file name
data_lines <- readLines(file_path, warn = FALSE)

# Extract lines containing @FirstBurialBlockM
burial_lines <- data_lines[grep("@FirstBurialBlockM", data_lines)]

# Display the found lines
cat("Found", length(burial_lines), "@FirstBurialBlockM entries:\n")
print(burial_lines)

# Extract numerical values from the @FirstBurialBlockM lines
# Initialise a data frame for the columns, based on the visible pattern
burial_data <- data.frame(line_number = integer(),
                          estimate_1 = numeric(),
                          estimate_2 = numeric(),
                          estimate_3 = numeric(),
                          estimate_4 = numeric(),
                          stringsAsFactors = FALSE)

# Process each @FirstBurialBlockM line
for (i in seq_along(burial_lines)) {
  line <- burial_lines[i]
  # Extract all numbers from the line
  numbers <- regmatches(line, gregexpr("-?\\d+\\.?\\d*", line))[[1]]
  numbers <- as.numeric(numbers)
  # Create a row for the data frame (pad with NA if fewer than 4 values)
  row_data <- c(i, numbers, rep(NA, 4 - length(numbers)))[1:5]
  burial_data <- rbind(burial_data, row_data)
}

# Set proper column names
colnames(burial_data) <- c("Run", "68%start", "68%end", "95%start", "95%end")

# Display the structured data
cat("\nStructured burial estimates:\n")
print(burial_data)

burial_data <- burial_data %>%
  mutate(range_68 = `68%end` - `68%start`,
         range_95 = `95%end` - `95%start`)
print(burial_data)

# Save the data
write_xlsx(burial_data, "BartonBlockM06starts.xlsx")

# PART 2
# Extract lines containing @LastBurialBlockM
burial_lines2 <- data_lines[grep("@LastBurialBlockM", data_lines)]

# Display the found lines
cat("Found", length(burial_lines2), "@LastBurialBlockM entries:\n")
print(burial_lines2)

# Extract numerical values from the @LastBurialBlockM lines
burial_data2 <- data.frame(line_number = integer(),
                           estimate_1 = numeric(),
                           estimate_2 = numeric(),
                           estimate_3 = numeric(),
                           estimate_4 = numeric(),
                           stringsAsFactors = FALSE)

# Process each @LastBurialBlockM line
for (i in seq_along(burial_lines2)) {
  line <- burial_lines2[i]
  # Extract all numbers from the line
  numbers <- regmatches(line, gregexpr("-?\\d+\\.?\\d*", line))[[1]]
  numbers <- as.numeric(numbers)
  # Create a row for the data frame (pad with NA if fewer than 4 values)
  row_data <- c(i, numbers, rep(NA, 4 - length(numbers)))[1:5]
  burial_data2 <- rbind(burial_data2, row_data)
}

# Set proper column names
colnames(burial_data2) <- c("Run", "68%start", "68%end", "95%start", "95%end")

# Display the structured data
cat("\nStructured burial estimates:\n")
print(burial_data2)

burial_data2 <- burial_data2 %>%
  mutate(range_68 = `68%end` - `68%start`,
         range_95 = `95%end` - `95%start`)
print(burial_data2)

# Save the data
write_xlsx(burial_data2, "BartonBlockM06ends.xlsx")
3.2 Visualisation of output from OxCal models
Figure 6 and Figure 7 show a summary of the date estimates for the parameter FirstBlockMBurial. It is clear, particularly in Figure 6, that the submission of more radiocarbon samples does not increase precision. In fact, the submission of further samples could increase the bandwidth of the date estimate (e.g. 13 and 14 samples).
Figure 6: Summary of 140 simulation models (10 models each for 15…2 simulated dates, AD 250–350), showing precision vs sample size for FirstBlockMBurial
Code
# Visualizing differences across multiple sample sizes (15, 14, 13 ... 2 samples)
library(ggplot2)
library(dplyr)
library(tidyr)
library(readxl)
library(gridExtra)
library(viridis)

# Read the data
data <- read_excel("Query.xlsx", sheet = "BlockMStarts")

# Extract sample size from the Parameter column (assumes format like "15Start", "14Start", etc.)
data$Sample_Size <- as.numeric(gsub("Start", "", data$Parameter))
data$Sample_Size_Label <- paste0(data$Sample_Size, " samples")
data$Model_Run <- ave(seq_along(data$Parameter), data$Parameter, FUN = seq_along)

# Create a duration column (End - Start)
data$Duration <- data$End - data$Start

# TREND ANALYSIS - shows how precision changes with sample size
summary_stats <- data %>%
  group_by(Sample_Size) %>%
  summarise(Mean_Start = mean(Start),
            SD_Start = sd(Start),
            Mean_End = mean(End),
            SD_End = sd(End),
            Mean_Duration = mean(Duration),
            SD_Duration = sd(Duration),
            CV_Start = sd(Start) / mean(Start) * 100,  # coefficient of variation
            .groups = 'drop')

BlockMFirst <- ggplot(summary_stats, aes(x = Sample_Size)) +
  geom_line(aes(y = Mean_Start, color = "Start Date"), size = 1.2, alpha = 0.8) +
  geom_point(aes(y = Mean_Start, color = "Start Date"), size = 3) +
  geom_ribbon(aes(ymin = Mean_Start - SD_Start, ymax = Mean_Start + SD_Start,
                  fill = "Start Date"), alpha = 0.3) +
  geom_line(aes(y = Mean_End, color = "End Date"), size = 1.2, alpha = 0.8) +
  geom_point(aes(y = Mean_End, color = "End Date"), size = 3) +
  geom_ribbon(aes(ymin = Mean_End - SD_End, ymax = Mean_End + SD_End,
                  fill = "End Date"), alpha = 0.3) +
  scale_color_manual(values = c("Start Date" = "#2E86AB", "End Date" = "#2E86AB")) +
  scale_fill_manual(values = c("Start Date" = "#2E86AB", "End Date" = "#2E86AB")) +
  scale_x_continuous(breaks = sort(unique(data$Sample_Size))) +
  labs(title = "FirstBlockMBurial vs precision vs sample size",
       subtitle = "Mean estimates with ±1 SD ribbons",
       x = "Number of Samples",
       y = "Posterior density estimate (cal AD)",
       color = "Boundary",
       fill = "Boundary") +
  theme_classic() +
  theme(legend.position = "none")

ggsave("BlockMFirst.png", BlockMFirst, width = 8, height = 6, dpi = 300)
Figure 7: Summary of 140 simulation models (10 models each for 15…2 simulated dates, AD 250–350), showing bandwidth for FirstBlockMBurial
Code
# Visualizing differences across multiple sample sizes (15, 14, 13 ... 2 samples)
library(ggplot2)
library(dplyr)
library(tidyr)
library(readxl)
library(gridExtra)
library(viridis)

# Read the data
data <- read_excel("Query.xlsx", sheet = "BlockMStarts")

# Extract sample size from the Parameter column (assumes format like "15Start", "14Start", etc.)
data$Sample_Size <- as.numeric(gsub("Start", "", data$Parameter))
data$Sample_Size_Label <- paste0(data$Sample_Size, " samples")
data$Model_Run <- ave(seq_along(data$Parameter), data$Parameter, FUN = seq_along)

# Create a duration column (End - Start)
data$Duration <- data$End - data$Start

# TIMELINE VISUALIZATION - shows date ranges across all sample sizes
data$Sample_Group <- factor(data$Sample_Size)
data$y_position <- as.numeric(factor(paste0(data$Sample_Size, "_", data$Model_Run)))

BlockMFirstBurial <- ggplot(data,
                            aes(y = reorder(paste0(data$Sample_Size, " samples - Run ",
                                                   data$Model_Run), -y_position))) +
  geom_segment(aes(x = Start, xend = End, color = factor(Sample_Size)),
               size = 2, alpha = 0.7) +
  geom_point(aes(x = Start, color = factor(Sample_Size)), size = 1.5) +
  geom_point(aes(x = End, color = factor(Sample_Size)), size = 1.5) +
  scale_color_viridis_d(option = "plasma", direction = -1, name = "Sample Size") +
  labs(title = "Barton Cemetery: FirstBurialBlockM by additional 14C measurements",
       subtitle = "Bandwidth for each simulation",
       x = "Date (cal AD)",
       y = "Model Run") +
  theme_classic() +
  theme(axis.text.y = element_text(size = 6),
        legend.position = "right") +
  guides(color = guide_legend(override.aes = list(size = 3)))

ggsave("BlockMFirstBurial.png", BlockMFirstBurial, width = 8, height = 6, dpi = 300)
Figure 8 and Figure 9 show a summary of the date estimates for the parameter LastBlockMBurial. It is clear, particularly in Figure 9, that the submission of eight or more additional radiocarbon samples does increase precision (note how the two lines get closer together). The increase in precision is, though, not significant.
Figure 8: Summary of 140 simulation models (10 models each for 15…2 simulated dates, AD 250–350), showing precision vs sample size for LastBlockMBurial
Figure 9: Summary of 140 simulation models (10 models each for 15…2 simulated dates, AD 250–350), showing bandwidth for LastBlockMBurial
4 Conclusion
The submission of more samples from the Block M cemetery is not recommended, unless specific burials are identified as particularly warranting scientific dating, i.e. they are spatially isolated from the main burial group, or have unusual pathologies or isotopic signatures.
The analysis outlined above has been undertaken without access to the full site records, i.e. matrices and stratigraphic narrative, and thus the conclusions may need to be revised should this information become available in the future.
Bronk Ramsey, Christopher. 2009. “Bayesian Analysis of Radiocarbon Dates.” Radiocarbon 51 (1): 337–60. https://doi.org/10.1017/S0033822200033865.
Buck, C. E., C. D. Litton, and A. F. M. Smith. 1992. “Calibration of Radiocarbon Results Pertaining to Related Archaeological Events.” Journal of Archaeological Science 19 (5): 497–512. https://doi.org/10.1016/0305-4403(92)90025-x.
Dunbar, E., G. T. Cook, P. Naysmith, B. G. Tripney, and S. Xu. 2016. “AMS 14C Dating at the Scottish Universities Environmental Research Centre (SUERC) Radiocarbon Dating Laboratory: Corrigendum.” Radiocarbon 58 (1): 233–33. https://doi.org/10.1017/rdc.2016.14.
Knowles, Timothy D. J., Paul S. Monaghan, and Richard P. Evershed. 2019. “Radiocarbon Sample Preparation Procedures and the First Status Report from the Bristol Radiocarbon AMS (BRAMS) Facility.” Radiocarbon 61 (5): 1541–50. https://doi.org/10.1017/rdc.2019.28.
Lanting, J. N., and J. van der Plicht. 1998. “Reservoir Effects and Apparent 14C Ages.” The Journal of Irish Archaeology 9: 151–65. http://www.jstor.org/stable/30001698.
R Core Team. 2024. “R: A Language and Environment for Statistical Computing.” https://www.R-project.org/.
Reimer, Paula J., William E. N. Austin, Edouard Bard, Alex Bayliss, Paul G. Blackwell, Christopher Bronk Ramsey, Martin Butzin, et al. 2020. “The IntCal20 Northern Hemisphere Radiocarbon Age Calibration Curve (0–55 Cal kBP).” Radiocarbon 62 (4): 725–57. https://doi.org/10.1017/RDC.2020.41.
Stuiver, Minze, and Henry A. Polach. 1977. “Discussion: Reporting of 14C Data.” Radiocarbon 19 (3): 355–63.
Synal, Hans-Arno, Martin Stocker, and Martin Suter. 2007. “MICADAS: A New Compact Radiocarbon AMS System.” Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 259 (1): 7–13. https://doi.org/10.1016/j.nimb.2007.01.138.